BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing

Neural Information Processing Systems

Our model is built on the pre-trained Stable Diffusion model, which was trained on web-scraped datasets. Proper content moderation and regulation are strongly advised to prevent undesirable consequences. In Figure 1, we outline common failure cases of the model; subject images used for fine-tuning are shown on the left.




BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing

Li, Dongxu, Li, Junnan, Hoi, Steven C. H.

arXiv.org Artificial Intelligence

Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulty preserving subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control, consuming subject images and text prompts as input. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. We then design a subject representation learning task that enables a diffusion model to leverage such visual representation and generate new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation and efficient fine-tuning for customized subjects with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Our implementation will be made public.
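
To make the conditioning flow described in the abstract concrete, below is a minimal, self-contained PyTorch sketch of one plausible reading of that design: a BLIP-2-style multimodal encoder maps a subject image and its text label to a small set of subject embeddings, which are appended to the ordinary text-prompt embeddings before the diffusion U-Net's cross-attention. The class names, dimensions, and the toy SubjectEncoder are illustrative assumptions for exposition, not the released BLIP-Diffusion implementation.

# Sketch of subject-conditioned prompting (assumed names and dimensions).
import torch
import torch.nn as nn


class SubjectEncoder(nn.Module):
    """Stand-in for a pre-trained BLIP-2-style multimodal encoder."""

    def __init__(self, img_dim=1024, txt_dim=768, n_queries=16, out_dim=768):
        super().__init__()
        # Learnable queries that summarize the subject into a fixed-length representation.
        self.queries = nn.Parameter(torch.randn(n_queries, out_dim))
        self.img_proj = nn.Linear(img_dim, out_dim)
        self.txt_proj = nn.Linear(txt_dim, out_dim)
        self.attn = nn.MultiheadAttention(out_dim, num_heads=8, batch_first=True)

    def forward(self, img_feats, subj_txt_feats):
        # Queries cross-attend over concatenated image + subject-text features.
        ctx = torch.cat([self.img_proj(img_feats), self.txt_proj(subj_txt_feats)], dim=1)
        q = self.queries.unsqueeze(0).expand(ctx.size(0), -1, -1)
        subj_emb, _ = self.attn(q, ctx, ctx)
        return subj_emb  # (batch, n_queries, out_dim)


def build_conditioning(text_emb, subj_emb):
    """Append subject embeddings to the text-prompt embeddings; the combined
    sequence is what the diffusion U-Net would attend to via cross-attention."""
    return torch.cat([text_emb, subj_emb], dim=1)


if __name__ == "__main__":
    B = 2
    img_feats = torch.randn(B, 257, 1024)  # e.g. ViT patch features of the subject image
    subj_txt = torch.randn(B, 4, 768)      # token features of the subject name, e.g. "dog"
    text_emb = torch.randn(B, 77, 768)     # ordinary text-prompt embeddings

    encoder = SubjectEncoder()
    cond = build_conditioning(text_emb, encoder(img_feats, subj_txt))
    print(cond.shape)  # torch.Size([2, 93, 768]): 77 prompt tokens + 16 subject tokens

Because the subject representation is produced by a frozen, pre-trained encoder rather than learned per subject, zero-shot generation amounts to a single forward pass, and per-subject fine-tuning only needs to adjust a small conditioning pathway, which is consistent with the speedup over DreamBooth-style optimization reported above.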